Greedy Algorithms for Approximating the Diameter of Machine Learning Datasets in Multidimensional Euclidean Space: Experimental Results
Finding the diameter of a dataset in multidimensional Euclidean space is a well-established problem with well-known algorithms. However, most algorithms in the literature do not scale well to high data dimensions; in most cases their time complexity grows exponentially, which makes them impractical. We therefore implemented four simple greedy algorithms for approximating the diameter of a multidimensional dataset, based respectively on minimum/maximum L2 norms, hill-climbing search, Tabu search, and beam search. The implemented algorithms run in near-linear time, scaling near-linearly with both the data size and its dimensionality. The results of experiments conducted on different machine learning datasets demonstrate the efficiency of the implemented algorithms, which can therefore be recommended for computing the diameter in machine learning applications that need it.
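The abstract does not include code, but the hill-climbing variant it mentions can be sketched roughly as follows: from a start point, repeatedly jump to the farthest point until the pairwise distance stops improving. Each sweep costs O(nd), matching the near-linear scaling claim. Function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def approx_diameter_hill_climb(X, restarts=3, seed=0):
    """Greedy hill-climbing approximation of the diameter of point set X.

    Illustrative sketch only: from a random start point, repeatedly move
    to the farthest point; stop when the distance no longer improves.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    best = 0.0
    for _ in range(restarts):
        i = int(rng.integers(n))
        prev = -1.0
        while True:
            # Distances from the current anchor point to all points: O(nd).
            d = np.linalg.norm(X - X[i], axis=1)
            j = int(np.argmax(d))
            if d[j] <= prev:
                break  # no farther point found; local optimum reached
            prev = d[j]
            i = j
        best = max(best, prev)
    return best
```

Because each restart performs only a handful of O(nd) sweeps, the total cost stays near-linear in both the number of points and the dimension, at the price of returning a lower bound on the true diameter rather than the exact value.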
Visual Passwords Using Automatic Lip Reading
This paper presents a visual passwords system to increase security. The
system relies mainly on recognizing the speaker from the visual speech
signal alone. The proposed scheme works in two stages: a visual-password
setting stage and a verification stage. In the setting stage, the system
asks the user to utter a selected password; a video of the user's face is
captured and processed by a dedicated word-based VSR system, which
extracts a sequence of feature vectors. In the verification stage, the
same procedure is executed, and the extracted features are compared with
the stored visual password. The proposed scheme has been evaluated on a
video database of 20 speakers (10 female and 10 male), plus 15 additional
male speakers in another video database, across different experiment
sets. The evaluation demonstrated the system's feasibility, with an
average error rate ranging from 7.63% to 20.51% in the worst tested
scenario; the approach therefore has the potential to be practical when
combined with conventional authentication methods such as usernames and
passwords.
Visual Speech Recognition
Lip reading is used to understand or interpret speech without hearing
it, a skill especially mastered by people with hearing difficulties. The
ability to lip read enables a person with a hearing impairment to
communicate with others and to engage in social activities that would
otherwise be difficult. Recent advances in the fields of computer
vision, pattern recognition, and signal processing have led to a growing
interest in automating this challenging task of lip reading. Indeed,
automating the human ability to lip read, a process referred to as
visual speech recognition (VSR) (or sometimes speech reading), could
open the door for other novel related applications. VSR has received a
great deal of attention in the last decade for its potential use in
applications such as human-computer interaction (HCI), audio-visual
speech recognition (AVSR), speaker recognition, talking heads, sign
language recognition and video surveillance. Its main aim is to
recognise spoken word(s) by using only the visual signal that is
produced during speech. Hence, VSR deals with the visual domain of
speech and involves image processing, artificial intelligence, object
detection, pattern recognition, statistical modelling, etc.
Comment: Speech and Language Technologies (Book), Prof. Ivo Ipsic (Ed.),
ISBN: 978-953-307-322-4, InTech (2011).
On Identifying Terrorists Using Their Victory Signs
In certain cases, the only evidence available to identify terrorists seen in digital images or videos is the shape of their hands, particularly the victory sign performed by many of them when they intentionally hide their faces and/or distort their voices. This paper proposes, for the first time, methods to identify such persons from their victory sign. These methods are based on features extracted from the finger areas using shape moments, in addition to other features derived from the finger contours. To evaluate the proposed methods and show the feasibility of this study, we created a victory-sign database of 400 volunteers using a mobile phone camera. The experimental results using different classifiers are encouraging; the best precision/recall was achieved by merging normalized features from both methods using a linear discriminant analysis classifier, with 96.6% precision and 96.3% recall. This high performance shows the great potential of the proposed methods for identifying terrorists from their victory sign.
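The abstract mentions "shape moments" without specifying which ones; Hu's seven invariant moments are a standard choice for silhouette shapes and serve here only as an assumed example of such features. The sketch below computes them for a binary hand mask from first principles.

```python
import numpy as np

def hu_moments(mask):
    """Hu's seven invariant moments of a binary silhouette.

    Illustrative only: the paper's exact moment features are not
    specified in the abstract. Central moments are normalized so the
    result is invariant to translation and scale.
    """
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    xbar, ybar = xs.mean(), ys.mean()

    def mu(p, q):  # central moment of order (p, q)
        return ((xs - xbar) ** p * (ys - ybar) ** q).sum()

    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Such moment vectors, concatenated with contour-derived features, could then be fed to a classifier such as linear discriminant analysis, as the abstract describes.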